Task: Offer Service Levels (SP)
Main Description

For services that involve obligations (an obligation of effort or an obligation to deliver results), we recommend agreeing on the standard for the obligation in advance. Compliance with the standard then serves as an indicator of the performance level of the permanent test organisation. The standard for the obligation is laid down in so-called service levels, agreed in consultation with the client. A service level applies to a specific service (e.g. the service “execution of the test execution phase”); this does not mean that service levels have to be defined for all services. How service levels are established depends on the policy of the permanent test organisation. One option is to customise them: the service levels are then established for each request in close consultation between the test organisation and its client. The consultation examines what the client considers important in terms of money, time or quality, after which it is determined how this will be measured and which principles and preconditions apply. An important starting point, clearly, is the selected test strategy.

Another way to identify service levels for test organisations is to define a number of possible options per standard service. For example, for the service “setting up system test environment”, the three possible sets of service levels are X, Y and Z. Each of these sets comes with its own price tag and its own requirements for the client.
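As an illustration, such option sets can be captured in a simple data structure. The sketch below is hypothetical: the option names, standards, prices and client requirements are invented for this example and are not part of the task description.

```python
# Hypothetical sketch: predefined service-level option sets per standard service.
# All names, prices and requirements below are invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class ServiceLevelOption:
    name: str                        # e.g. "X", "Y" or "Z"
    service_levels: dict[str, str]   # aspect -> standard, in measurable units
    price: float                     # the price tag attached to this option set
    client_requirements: list[str] = field(default_factory=list)


# Possible option sets for the service "setting up system test environment".
SYSTEM_TEST_ENVIRONMENT_OPTIONS = [
    ServiceLevelOption(
        name="X",
        service_levels={"availability": ">= 95% during office hours"},
        price=10_000.0,
        client_requirements=["client supplies representative test data"],
    ),
    ServiceLevelOption(
        name="Y",
        service_levels={"availability": ">= 99% during office hours"},
        price=15_000.0,
        client_requirements=[
            "client supplies representative test data",
            "client names a technical contact person",
        ],
    ),
]
```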

Below you will find a number of examples of service levels as identified in practice for different projects and services. For each service level, the following is specified:

  • The aspect to which it relates (e.g. the quality of the product of the service, the reliability of the service planning, or the costs of executing the service)
  • A description of the service level (the obligation incurred, expressed in measurable units)
  • The standard of the service level (the minimum performance required to comply with the service level).

Aspect: Quality
Service level: Quantity and severity of the defects the test organisation incorrectly failed to detect during the test
Standard: For each severity class, the number of defects found in the first three months after the test is less than 3% of the total number of defects in that severity class.

Aspect: Reliability
Service level: Degree to which activities are executed in accordance with the agreed plan and deadlines
Standard: The maximum delay on the agreed milestones is 5% of the lead time of those milestones.

Aspect: Response speed
Service level: Speed at which new requests are tackled
Standard: 1. Sub-process “registering assignment” completed within 1 working day. 2. Sub-process “intake” completed within 2 working days after registration. In the case of requests to change the assignment, the client is notified of the consequences within 2 working days.

Aspect: Knowledge retention
Service level: Effort the supplier must invest in training the core team
Standard: No more than 5% of the total number of test hours.

Aspect: Cost reduction
Service level: Average costs of test projects
Standard: Test costs as a percentage of overall project costs are, on average across all projects, <= 35%.

Aspect: Lead time reduction
Service level: Average lead time of test projects
Standard: Lead time of tests (on the critical path) as a percentage of the overall project lead time is <= 20%.
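To show how such standards translate into measurable checks, the sketch below evaluates two of the examples above: the quality standard (per severity class, defects missed by the test and found in the first three months must stay below 3% of all defects in that class) and the reliability standard (the maximum delay on a milestone is 5% of its lead time). The function names and the figures in the usage example are hypothetical, not part of the task description.

```python
# Minimal, hypothetical sketch of measuring two of the example service levels.

def quality_standard_met(missed_defects: int, total_defects: int,
                         threshold: float = 0.03) -> bool:
    """Quality: defects the test failed to detect, found in the first three
    months after the test, must be less than 3% of all defects in the class."""
    if total_defects == 0:
        return True  # no defects in this severity class, so nothing was missed
    return missed_defects / total_defects < threshold


def reliability_standard_met(delay_days: float, lead_time_days: float,
                             threshold: float = 0.05) -> bool:
    """Reliability: the maximum delay on a milestone is 5% of its lead time."""
    return delay_days <= threshold * lead_time_days


# Example: a severity class had 200 defects in total, 5 of which were missed
# by the test and surfaced in the first three months of production use.
assert quality_standard_met(missed_defects=5, total_defects=200)      # 2.5% < 3%
# A 3-day delay on a milestone with a 40-day lead time exceeds the 5% standard.
assert not reliability_standard_met(delay_days=3, lead_time_days=40)  # 7.5% > 5%
```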


The following challenges and problems are associated with agreeing on service levels:

  • Defining the aspects and service levels: which should be chosen, and how can overly one-sided service levels be prevented (e.g. focusing exclusively on the cost aspect and ignoring other aspects)?
  • The recognisability of the chosen service level for the client and their organisation (the test techniques to be used, for example, mean nothing to most clients)
  • The inability to measure the chosen service level accurately
  • The fact that other parties usually affect the degree to which the agreed service level is realised (e.g. the speed of testing depends heavily on the quality of the delivered software)
  • Inadequate analysis of the causes of realising, or failing to realise, the service level
  • A lack of historical data, making it difficult to set hard standards for the service level
  • An unrealistic standard for the service level
  • Measuring the service level becoming a goal in itself rather than a means of measuring the quality of the service.